Concave regularizations and MAP priors for sparse topic models

Author

  • Johan H. Ugander
Abstract

Across all sectors of the modern information economy, large unstructured repositories of data are being aggregated at an ever-increasing rate. This move towards 'big data' has created enormous demand for techniques that efficiently extract structure from such data sets. Specific contexts for this demand include natural language models for organizing text corpora, image feature extraction models for navigating large photo datasets, and community detection in social networks for optimizing content delivery. Models of such structure are broadly called topic models or latent variable mixture models; they aim to identify maximally informative latent topics common to different elements of the unstructured dataset.


Similar references

FOCUSS-Based Dictionary Learning Algorithms

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log-priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally ...
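The concave log-prior described in this abstract leads to a MAP estimate that is typically computed by iteratively reweighted least squares, as in the FOCUSS family of algorithms. The following is a minimal illustrative sketch (not the paper's implementation) of a regularized FOCUSS-style update for min ||y - Ax||² + λ Σ|x_i|^p with p < 1; the function name and parameter defaults are hypothetical.

```python
import numpy as np

def focuss(A, y, p=0.5, lam=1e-6, iters=200, eps=1e-9):
    """Illustrative FOCUSS-style sketch: MAP sparse coding under a
    concave |x|^p penalty (p < 1) via iteratively reweighted
    least squares. Not the authors' code; a minimal assumption-laden demo."""
    m, n = A.shape
    # Minimum-norm least-squares initialization.
    x = np.linalg.pinv(A) @ y
    for _ in range(iters):
        # Reweighting derived from the concave penalty: entries with
        # small magnitude get small weights and are driven toward zero.
        w = np.abs(x) ** (1.0 - p / 2.0) + eps
        AW = A * w  # column scaling, i.e. A @ diag(w)
        # Tikhonov-regularized weighted least-squares update.
        x = w * (AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(m), y))
    return x
```

On a small synthetic problem (random Gaussian dictionary, a 2-sparse ground truth), this update typically recovers a sparse coefficient vector that reproduces the observations; the concavity of the penalty is what favors exactly-sparse local minima over the dense minimum-norm solution.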


Consistent Discretization of Linear Inverse Problems using Sparse Stochastic Processes

We introduce a novel discretization paradigm and specify MAP estimators for linear inverse problems by using the theory of continuous-domain sparse stochastic processes. We characterize the complete class of admissible priors for the discretized version of the signal and show that the said class is restricted to the family of infinitely divisible distributions. We also explain the connections b...


Inverse Problem Regularization with Weak Decomposable Priors. Part I: Recovery Guarantees

This first talk is dedicated to assessing the theoretical recovery performance of this class of regularizers. We consider regularizations with convex positively 1-homogeneous functionals (in fact gauges) which obey a weak decomposability property. The weak decomposability will promote solutions of the inverse problem conforming to some notion of simplicity/low complexity by living on a low dimens...


Dictionary Learning Algorithms for Sparse Representation

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally...


Learning Latent Variable Gaussian Graphical Model for Biomolecular Network with Low Sample Complexity

Learning a Gaussian graphical model with latent variables is ill-posed when the sample complexity is insufficient, so the problem must be appropriately regularized. A common choice is a convex ℓ1 norm plus a nuclear norm to regularize the search. However, the best estimator performance is not always achieved with these additive convex regularizations, especially when the sample complexity is low...
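The additive ℓ1 + nuclear-norm regularizer mentioned here is usually handled with proximal splitting methods, whose building blocks are the two proximal operators below. This is a generic illustrative sketch of those operators (function names are ours, not from the cited paper): elementwise soft-thresholding for the ℓ1 term, which promotes a sparse precision component, and singular-value soft-thresholding for the nuclear-norm term, which promotes a low-rank latent component.

```python
import numpy as np

def prox_l1(X, t):
    """Proximal operator of t*||X||_1: elementwise soft-thresholding.
    Shrinks every entry toward zero by t, promoting sparsity."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_nuclear(X, t):
    """Proximal operator of t*||X||_* (nuclear norm): soft-threshold
    the singular values, promoting low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```

In a latent-variable graphical model estimator of this kind, the sparse prox acts on the observed-variable precision part and the nuclear prox on the latent correction term within each iteration of the solver.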



Journal title:

Volume   Issue

Pages  -

Publication date: 2010